Upgrade of a Kubernetes cluster on a shared network is reported as successful even when the worker nodes are not upgraded
Types of changes
Breaking change (fix or feature that would cause existing functionality to change)
New feature (non-breaking change which adds functionality)
Bug fix (non-breaking change which fixes an issue)
Enhancement (improves an existing feature and functionality)
Cleanup (Code refactoring and cleanup, that may add test cases)
Screenshots (if appropriate):
How Has This Been Tested?
Prior to the fix:
Deployed a 1.16.0 k8s cluster:
k8s-master ~ # kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready,SchedulingDisabled master 6m23s v1.16.0
k8s-node-1 Ready <none> 5m51s v1.16.0
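For context, the test cluster above is a CKS cluster on a shared network; a deployment along the following lines would reproduce the setup (CloudMonkey syntax, all IDs and the cluster name are hypothetical placeholders, not values from this test):

# Create a 1.16.0 Kubernetes cluster on an existing shared network
# (zone, version, offering and network UUIDs below are placeholders)
cmk create kubernetescluster name=<cluster-name> size=1 \
  zoneid=<zone-uuid> \
  kubernetesversionid=<k8s-1.16.0-version-uuid> \
  serviceofferingid=<offering-uuid> \
  networkid=<shared-network-uuid>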
Upgraded the cluster to 1.16.3. Although the upgradeKubernetesCluster API returned a successful response, the worker nodes were not upgraded:
k8s-master ~ # kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 8m35s v1.16.3
k8s-node-1 Ready <none> 8m3s v1.16.0
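For reference, the upgrade step above corresponds to an upgradeKubernetesCluster call; a hedged example via CloudMonkey (the UUIDs are hypothetical placeholders):

# Find the cluster and the 1.16.3 Kubernetes version entry
cmk list kubernetesclusters
cmk list kubernetessupportedversions

# Trigger the upgrade; before the fix this returned success on a shared
# network even though the worker nodes were still on the old version
cmk upgrade kubernetescluster id=<cluster-uuid> kubernetesversionid=<k8s-1.16.3-version-uuid>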
Post fix, once the upgrade completes, i.e., when the upgradeKubernetesCluster API returns a success response, kubectl get nodes shows all nodes at the upgraded version (here, 1.16.3):
c2-master ~ # kubectl get nodes
NAME STATUS ROLES AGE VERSION
c2-master Ready master 18m v1.16.0
c2-node-1 Ready <none> 18m v1.16.0
c2-master ~ # kubectl get nodes
NAME STATUS ROLES AGE VERSION
c2-master Ready,SchedulingDisabled master 19m v1.16.0
c2-node-1 Ready <none> 18m v1.16.0
...
c2-master ~ # kubectl get nodes
NAME STATUS ROLES AGE VERSION
c2-master Ready master 25m v1.16.3
c2-node-1 NotReady <none> 25m v1.16.3
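The post-fix behaviour amounts to the check below; a minimal sketch of verifying the result from the master node (assuming kubectl access and a requested version of v1.16.3; this is an illustration, not the actual code path of the fix):

# Exit 0 only if every node is Ready and already reports the target version
TARGET_VERSION=v1.16.3
if kubectl get nodes --no-headers | \
   awk -v v="$TARGET_VERSION" '$2 != "Ready" || $NF != v { bad = 1 } END { exit bad }'; then
  echo "all nodes upgraded to $TARGET_VERSION"
else
  echo "upgrade incomplete"
fi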
Trillian test result (tid-3208)
Environment: kvm-centos7 (x2), Advanced Networking with Mgmt server 7
Total time taken: 30505 seconds
Marvin logs: https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr4458-t3208-kvm-centos7.zip
Intermittent failure detected: /marvin/tests/smoke/test_diagnostics.py
Smoke tests completed. 83 look OK, 0 have error(s)
Only failed test results are shown below: